Collective Stability in Structured Prediction: Appendix
Authors
Abstract
Note that the above does not require independence. To prove Theorem 1, it therefore suffices to bound $\sum_{i=1}^{n} \alpha_i^2$. Kontorovich & Ramanan (2008, Remark 2.1) showed that if $f$ is $c$-Lipschitz with respect to the Hamming metric, then $\sum_{i=1}^{n} \alpha_i^2 \leq n c^2 \|\Theta_n^\pi\|_\infty^2$. (Though the published results only prove this for countable spaces, Kontorovich extended the analysis to continuous spaces in his thesis (2007).) Since a function that is $c$-Lipschitz with respect to the normalized Hamming metric is $(c/n)$-Lipschitz with respect to the unnormalized metric, it follows that $\sum_{i=1}^{n} \alpha_i^2 \leq c^2 \|\Theta_n^\pi\|_\infty^2 / n$, which completes the proof.
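The two bounds above differ only in how the Lipschitz constant is scaled. A minimal numerical sketch (not from the paper; the inputs `c` for the Lipschitz constant and `theta_norm` for $\|\Theta_n^\pi\|_\infty$ are hypothetical values chosen for illustration) shows how the unnormalized bound grows with $n$ while the normalized one shrinks:

```python
def alpha_squared_bound(n: int, c: float, theta_norm: float,
                        normalized: bool = False) -> float:
    """Illustrative upper bound on sum_{i=1}^n alpha_i^2.

    Unnormalized Hamming metric:  n * c**2 * theta_norm**2
    Normalized Hamming metric:    c**2 * theta_norm**2 / n
    (The normalized case follows by substituting c/n for c.)
    """
    if normalized:
        return c ** 2 * theta_norm ** 2 / n
    return n * c ** 2 * theta_norm ** 2

# Under the normalized Hamming metric, the bound on sum_i alpha_i^2
# (and hence the deviation term it controls) decreases as n grows:
for n in (10, 100, 1000):
    print(n, alpha_squared_bound(n, c=1.0, theta_norm=2.0, normalized=True))
```

This is why normalizing the metric is what yields generalization from a single large example: the bound vanishes as the size $n$ of the structured output grows.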
Similar Papers
Collective Stability in Structured Prediction: Generalization from One Example
Structured predictors enable joint inference over multiple interdependent output variables. These models are often trained on a small number of examples with large internal structure. Existing distribution-free generalization bounds do not guarantee generalization in this setting, though this contradicts a large body of empirical evidence from computer vision, natural language processing, socia...
PAC-Bayesian Collective Stability
Recent results have shown that the generalization error of structured predictors decreases with both the number of examples and the size of each example, provided the data distribution has weak dependence and the predictor exhibits a smoothness property called collective stability. These results use an especially strong definition of collective stability that must hold uniformly over all inputs...
Improved Generalization Bounds for Large-scale Structured Prediction
Collective inference has been shown empirically to successfully exploit the natural dependencies in relational and network data [5, 11, 12, 13]. Though many collective techniques are capable of induction, and have been shown to be asymptotically consistent [16], little to no theory exists concerning the generalization of such methods. Collective inference can sometimes be viewed as large-scale ...
Empirical Analysis of Collective Stability
When learning structured predictors, collective stability is an important factor for generalization. London et al. (2013) provide the first analysis of this effect, proving that collectively stable hypotheses produce less deviation between empirical risk and true risk, i.e., defect. We test this effect empirically using a collectively stable variant of max-margin Markov networks. Our experiments...
Managing the risk of learning: Psychological safety in work teams
This social psychological analysis explores themes of trust and collective learning in teams. I describe interpersonal risks that can inhibit collective learning, distinguish psychological safety from trust, and explain why psychological safety mitigates interpersonal risks and facilitates a structured learning process in teams. Examples from field studies in several organizational settings are...